Gen AI Data Engineer II
Dynatron is transforming the automotive service industry with intelligent SaaS solutions that deliver measurable results for thousands of dealership service departments. Our proprietary analytics, automation capabilities, and AI-powered workflows empower service leaders to increase profitability, elevate customer satisfaction, and operate with greater efficiency. With strong market traction, accelerating demand, and a rapidly expanding product ecosystem, we’re scaling fast, and we’re just getting started.
The Opportunity
We’re seeking a hands-on, forward-thinking GenAI Data Engineer to help architect and scale the next generation of AI systems powering Dynatron’s intelligent SaaS platform. In this high-impact role, you will design production-grade RAG pipelines, integrate enterprise data with leading LLMs, and build the infrastructure that enables real-time generative insights across the organization.
This is a unique opportunity to shape the technical foundation of AI innovation at a fast-growth SaaS company. If you thrive at the intersection of data engineering and applied machine learning, and you are energized by building modern GenAI capabilities from the ground up, this role offers the ability to make a meaningful and visible impact.
What You’ll Do
Engineer Generative AI Data Systems
- Design and maintain data pipelines for training, fine-tuning, and retrieval-augmented generation (RAG) use cases.
- Build ingestion frameworks using AWS Glue, Lambda, Kinesis, and Step Functions to support large-scale AI workloads.
- Develop embedding pipelines, feature stores, and vector database integrations (Pinecone, FAISS, Chroma, Amazon OpenSearch) to power semantic retrieval.
- Transform unstructured data (documents, text, images, logs) into AI-ready assets for LLM applications.
Integrate & Orchestrate LLM Architectures
- Build end-to-end GenAI pipelines connecting enterprise data with LLMs including Anthropic Claude, Amazon Titan, OpenAI GPT, and Llama 3.
- Use LangChain, LlamaIndex, and Bedrock Agents to deliver context-rich RAG, prompt-chaining, and conversational intelligence.
- Develop LLM-powered APIs enabling natural language querying, summarization, search, and generative workflows.
- Tune prompts and context-window usage, evaluate models, and improve response quality.
Scale AI Infrastructure & MLOps
- Deploy, monitor, and optimize LLM workflows on Amazon Bedrock and other cloud AI platforms.
- Implement CI/CD and orchestration pipelines for GenAI systems using Airflow, Prefect, GitHub Actions, or AWS CodePipeline.
- Establish data and model observability frameworks to track drift, accuracy, latency, and performance.
- Partner with Data Science and MLOps teams to streamline fine-tuning, deployment, and scalable model operations.
Champion Governance, Security & Responsible AI
- Implement data lineage, access controls, encryption, and governance for AI datasets.
- Enforce Responsible AI practices, ensuring transparency, risk mitigation, and ethical use of LLMs.
- Maintain prompt logs, telemetry, and audit documentation supporting SOC 2, GDPR, and CCPA compliance.
What You Bring
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
- 5+ years of data engineering experience, including 2+ years developing GenAI or LLM-based solutions.
- Strong proficiency in:
  - Amazon Bedrock, SageMaker, or Vertex AI
  - LangChain or LlamaIndex
  - Snowflake, Redshift, or Databricks
  - Python, SQL, and API integrations
  - Vector databases (Pinecone, FAISS, Chroma, OpenSearch)
- Proven experience building RAG pipelines, embeddings, and prompt-chaining architectures.
- Deep understanding of data modeling, orchestration, and MLOps best practices.
- Ability to integrate LLM capabilities into enterprise SaaS products and data platforms.
Preferred Qualifications
- Experience with advanced GenAI frameworks such as AutoGen, CrewAI, or Semantic Kernel.
- Familiarity with data observability tools (Monte Carlo, DataHub, Marquez).
- Experience with Docker, Kubernetes, and CI/CD deployment automation.
- Relevant certifications, such as:
  - AWS Certified Machine Learning – Specialty
  - Google Cloud Generative AI Engineer
  - MIT or Stanford AI/ML Certificates
  - DeepLearning.AI Generative AI Specialization
Success Measures
- Deployment of reliable GenAI pipelines supporting Dynatron’s automation and analytics initiatives.
- Improved latency, accuracy, and consistency of LLM-generated outputs.
- Reduction in hallucinations and data quality gaps across AI workflows.
- Increased adoption of AI- and LLM-enabled interfaces across Dynatron’s product ecosystem and business functions.
Compensation
- Base Salary: $110K–$135K
- Equity: Participation in Dynatron’s Equity Incentive Plan
Benefits Summary
- Comprehensive health, vision, and dental insurance
- Employer-paid short- and long-term disability and life insurance
- 401(k) with competitive company match
- Flexible vacation policy and 9 paid holidays
- Remote-first culture
Ready to Build the Future of AI at Dynatron?
Join us and help architect the intelligent systems that power smarter decisions, stronger performance, and next-generation automation across the automotive service industry.